
    Evaluation of clinical prediction models (part 3): calculating the sample size required for an external validation study

    An external validation study evaluates the performance of a prediction model in new data, but many of these studies are too small to provide reliable answers. In the third article of their series on model evaluation, Riley and colleagues describe how to calculate the sample size required for external validation studies, and propose tailoring calculations to the model and setting at hand rather than relying on rules of thumb.
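    As a rough illustration of the tailored approach the authors advocate (not their full method, which also targets precision of the calibration slope, c-statistic, and net benefit), the Python sketch below computes the validation sample size needed to estimate the observed/expected (O/E) ratio precisely for a binary outcome, using the approximation var(ln O/E) ≈ (1 − φ)/(nφ), where φ is the anticipated outcome proportion. The function name and the example inputs (φ = 0.10, target 95% CI for O/E of roughly 0.8 to 1.25) are illustrative assumptions only.

```python
import math

def n_for_oe_precision(outcome_prop, oe_ci_lower=0.8, oe_ci_upper=1.25):
    """Validation sample size so the 95% CI for O/E spans roughly
    (oe_ci_lower, oe_ci_upper), using var(ln(O/E)) ~ (1 - phi) / (n * phi)
    for a binary outcome with anticipated outcome proportion phi."""
    z = 1.96  # two-sided 95% confidence level
    target_se = (math.log(oe_ci_upper) - math.log(oe_ci_lower)) / (2 * z)
    return math.ceil((1 - outcome_prop) / (outcome_prop * target_se ** 2))

# Illustrative example: anticipated outcome proportion of 10%
print(n_for_oe_precision(0.10))  # roughly 700 participants for this criterion alone
```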

    Development and External Validation of Individualized Prediction Models for Pain Intensity Outcomes in Patients With Neck Pain, Low Back Pain, or Both in Primary Care Settings

    OBJECTIVE: The purpose of this study was to develop and externally validate multivariable prediction models for future pain intensity outcomes to inform targeted interventions for patients with neck or low back pain in primary care settings.
    METHODS: Model development data were obtained from a group of 679 adults with neck or low back pain who consulted a participating United Kingdom general practice. Predictors included self-report items regarding pain severity and impact from the STarT MSK Tool. Pain intensity at 2 and 6 months was modeled separately for continuous and dichotomized outcomes using linear and logistic regression, respectively. External validation of all models was conducted in a separate group of 586 patients recruited from a similar population, with patients' predictor information collected both at the point of consultation and 2 to 4 weeks later using self-report questionnaires. Calibration and discrimination of the models were assessed separately using STarT MSK Tool data from both time points to assess differences in predictive performance.
    RESULTS: Pain intensity and patients reporting that their condition would last a long time contributed most to predictions of future pain intensity, conditional on other variables. On external validation, models were reasonably well calibrated on average when using tool measurements taken 2 to 4 weeks after consultation (calibration slope = 0.848 [95% CI = 0.767 to 0.928] for 2-month pain intensity score), but performance was poor using point-of-consultation tool data (calibration slope for 2-month pain intensity score of 0.650 [95% CI = 0.549 to 0.750]).
    CONCLUSION: Model predictive accuracy was good when predictors were measured 2 to 4 weeks after primary care consultation, but poor when measured at the point of consultation. Future research will explore whether additional, nonmodifiable predictors improve point-of-consultation predictive performance.
    IMPACT: External validation demonstrated that these individualized prediction models were not sufficiently accurate to recommend their use in clinical practice. Further research is required to improve performance through inclusion of additional nonmodifiable risk factors.
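    For the dichotomized outcomes, the calibration slope and discrimination reported above can be reproduced on external data with standard tools. The sketch below is a minimal illustration, not the authors' code: it regresses the observed outcome on the logit of the predicted risk to obtain the calibration slope, and uses the area under the ROC curve as the c-statistic. The helper name and input arrays are hypothetical; the continuous-outcome models in the study would instead use a linear regression of observed on predicted values.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def external_validation_metrics(y_obs, p_pred):
    """Calibration slope and c-statistic for a binary-outcome prediction model
    evaluated on an external cohort. p_pred are risks from the existing model."""
    p = np.clip(p_pred, 1e-6, 1 - 1e-6)
    lp = np.log(p / (1 - p))  # linear predictor: logit of the predicted risk
    fit = sm.Logit(y_obs, sm.add_constant(lp)).fit(disp=0)
    return {
        "calibration_slope": fit.params[1],  # 1.0 is ideal; <1 suggests predictions are too extreme
        "c_statistic": roc_auc_score(y_obs, p),
    }
```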

    Development and validation of prediction models to estimate risk of primary total hip and knee replacements using data from the UK: two prospective open cohorts using the UK Clinical Practice Research Datalink

    Other funding: This study was funded by NIHR School for Primary Care Research Funding Round 9 (Project No: 258) and by Public Health England. CDM is funded by the NIHR Collaborations for Leadership in Applied Health Research and Care West Midlands, the NIHR School for Primary Care Research and a NIHR Research Professorship in General Practice (NIHR-RP-2014-04-026). JE is a NIHR Academic Clinical Lecturer. The views expressed in this paper are those of the author(s) and not necessarily those of the NHS, the NIHR, Public Health England, or the Department of Health. This research is funded by the National Institute for Health Research School for Primary Care Research (NIHR SPCR).
    The ability to efficiently and accurately predict future risk of primary total hip and knee replacement (THR/TKR) in earlier stages of osteoarthritis (OA) has potentially important applications. We aimed to develop and validate two models to estimate an individual's risk of primary THR and TKR in patients newly presenting to primary care. We identified two cohorts of patients aged ≥40 years newly consulting with hip pain/OA and knee pain/OA in the Clinical Practice Research Datalink. Candidate predictors were identified by systematic review, a novel hypothesis-free 'Record-Wide Association Study' with replication, and panel consensus. Cox proportional hazards models accounting for the competing risk of death were applied to derive risk algorithms for THR and TKR. Internal-external cross-validation (IECV) was then applied over geographical regions to validate the two models. 45 predictors for THR and 53 for TKR were identified, reviewed and selected by the panel. 301 052 and 416 030 patients newly consulting between 1992 and 2015 were identified in the hip and knee cohorts, respectively (median follow-up 6 years). The resultant model C-statistics were 0.73 (0.72, 0.73) for the THR model (20 predictors) and 0.79 (0.78, 0.79) for the TKR model (24 predictors). The IECV C-statistics ranged between 0.70-0.74 (THR model) and 0.76-0.82 (TKR model); the IECV calibration slopes ranged between 0.93-1.07 (THR model) and 0.92-1.12 (TKR model). Two prediction models with good discrimination and calibration that estimate individuals' risk of THR and TKR have been developed and validated in large-scale, nationally representative data, and are readily automated in electronic patient records.
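    The internal-external cross-validation reported here can be sketched as a simple leave-one-region-out loop. The Python sketch below is a minimal illustration using lifelines, with hypothetical column names; it fits a standard Cox model and therefore omits the competing-risk-of-death adjustment used in the actual study.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

def iecv_by_region(df, predictors, duration_col="time", event_col="event", region_col="region"):
    """Internal-external cross-validation: hold out one geographical region at a
    time, fit a Cox model on the remaining regions, and evaluate discrimination
    and calibration slope in the held-out region."""
    results = []
    for region in df[region_col].unique():
        train = df[df[region_col] != region]
        test = df[df[region_col] == region].copy()

        cph = CoxPHFitter()
        cph.fit(train[predictors + [duration_col, event_col]], duration_col, event_col)

        # Linear predictor in the held-out region
        test["lp"] = cph.predict_log_partial_hazard(test[predictors])

        # Harrell's C: a higher linear predictor should mean a shorter time to event
        c_stat = concordance_index(test[duration_col], -test["lp"], test[event_col])

        # Calibration slope: coefficient of the linear predictor when refitted alone
        slope_fit = CoxPHFitter().fit(test[["lp", duration_col, event_col]], duration_col, event_col)
        slope = slope_fit.params_["lp"]

        results.append({"held_out_region": region, "c_statistic": c_stat, "calibration_slope": slope})
    return pd.DataFrame(results)
```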

    Risk of bias assessments in individual participant data meta-analyses of test accuracy and prediction models: a review shows improvements are needed

    OBJECTIVES: Risk of bias assessments are important in meta-analyses of both aggregate and individual participant data (IPD). There is limited evidence on whether and how risk of bias of included studies or datasets in IPD meta-analyses (IPDMAs) is assessed. We review how risk of bias is currently assessed, reported, and incorporated in IPDMAs of test accuracy and clinical prediction model studies and provide recommendations for improvement.
    STUDY DESIGN AND SETTING: We searched PubMed (January 2018-May 2020) to identify IPDMAs of test accuracy and prediction models, then elicited whether each IPDMA assessed risk of bias of included studies and, if so, how assessments were reported and subsequently incorporated into the IPDMAs.
    RESULTS: Forty-nine IPDMAs were included. Nineteen of 27 (70%) test accuracy IPDMAs assessed risk of bias, compared to 5 of 22 (23%) prediction model IPDMAs. Seventeen of 19 (89%) test accuracy IPDMAs used the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool, but no tool was used consistently among prediction model IPDMAs. Of IPDMAs assessing risk of bias, 7 (37%) test accuracy IPDMAs and 1 (20%) prediction model IPDMA provided details on the information sources (e.g., the original manuscript, IPD, primary investigators) used to inform judgments, and 4 (21%) test accuracy IPDMAs and 1 (20%) prediction model IPDMA provided information on whether assessments were done before or after obtaining the IPD of the included studies or datasets. Of all included IPDMAs, only seven test accuracy IPDMAs (26%) and one prediction model IPDMA (5%) incorporated risk of bias assessments into their meta-analyses. For future IPDMA projects, we provide guidance on how to adapt tools such as the Prediction model Risk Of Bias ASsessment Tool (PROBAST, for prediction models) and QUADAS-2 (for test accuracy) to assess risk of bias of included primary studies and their IPD.
    CONCLUSION: Risk of bias assessments and their reporting need to be improved in IPDMAs of test accuracy and, especially, prediction model studies. Using recommended tools, both before and after IPD are obtained, will address this.

    Development and validation of prediction models to estimate risk of primary total hip and knee replacements using data from the UK: two prospective open cohorts using the UK Clinical Practice Research Datalink

    Objectives: The ability to efficiently and accurately predict future risk of primary total hip and knee replacement (THR/TKR) in earlier stages of osteoarthritis (OA) has potentially important applications. We aimed to develop and validate two models to estimate an individual’s risk of primary THR and TKR in patients newly presenting to primary care.
    Methods: We identified two cohorts of patients aged ≥40 years newly consulting with hip pain/OA and knee pain/OA in the Clinical Practice Research Datalink. Candidate predictors were identified by systematic review, a novel hypothesis-free ‘Record-Wide Association Study’ with replication, and panel consensus. Cox proportional hazards models accounting for the competing risk of death were applied to derive risk algorithms for THR and TKR. Internal–external cross-validation (IECV) was then applied over geographical regions to validate the two models.
    Results: 45 predictors for THR and 53 for TKR were identified, reviewed and selected by the panel. 301 052 and 416 030 patients newly consulting between 1992 and 2015 were identified in the hip and knee cohorts, respectively (median follow-up 6 years). The resultant model C-statistics were 0.73 (0.72, 0.73) for the THR model (20 predictors) and 0.79 (0.78, 0.79) for the TKR model (24 predictors). The IECV C-statistics ranged between 0.70–0.74 (THR model) and 0.76–0.82 (TKR model); the IECV calibration slopes ranged between 0.93–1.07 (THR model) and 0.92–1.12 (TKR model).
    Conclusions: Two prediction models with good discrimination and calibration that estimate individuals’ risk of THR and TKR have been developed and validated in large-scale, nationally representative data, and are readily automated in electronic patient records.

    Prediction models for diagnosis and prognosis of covid-19: systematic review and critical appraisal

    Readers’ note: This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.
    Funding: LW, BVC, LH, and MDV acknowledge specific funding for this work from Internal Funds KU Leuven, KOOR, and the COVID-19 Fund. LW is a postdoctoral fellow of the Research Foundation-Flanders (FWO) and receives support from ZonMw (grant 10430012010001). BVC received support from FWO (grant G0B4716N) and Internal Funds KU Leuven (grant C24/15/037). TPAD acknowledges financial support from the Netherlands Organisation for Health Research and Development (grant 91617050). VMTdJ was supported by the European Union Horizon 2020 Research and Innovation Programme under ReCoDID grant agreement 825746. KGMM and JAAD acknowledge financial support from the Cochrane Collaboration (SMF 2018). KIES is funded by the National Institute for Health Research (NIHR) School for Primary Care Research. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR, or the Department of Health and Social Care. GSC was supported by the NIHR Biomedical Research Centre, Oxford, and Cancer Research UK (programme grant C49297/A27294). JM was supported by Cancer Research UK (programme grant C49297/A27294). PD was supported by the NIHR Biomedical Research Centre, Oxford. MOH is supported by the National Heart, Lung, and Blood Institute of the United States National Institutes of Health (grant R00 HL141678). ICCvDH and BCTvB received funding from Euregio Meuse-Rhine (grant Covid Data Platform (coDaP) Interreg EMR187). The funders played no role in study design, data collection, data analysis, data interpretation, or reporting.

    Clinical Prediction Models to Predict the Risk of Multiple Binary Outcomes: a comparison of approaches

    Clinical prediction models (CPMs) can predict clinically relevant outcomes or events. Typically, prognostic CPMs are derived to predict the risk of a single future outcome. However, there are many medical applications where two or more outcomes are of interest, and this should be more widely reflected in CPMs so they can accurately estimate the joint risk of multiple outcomes simultaneously. A potentially naïve approach to multi-outcome risk prediction is to derive a CPM for each outcome separately, then multiply the predicted risks. This approach is only valid if the outcomes are conditionally independent given the covariates, and it fails to exploit the potential relationships between the outcomes. This paper outlines several approaches that could be used to develop CPMs for multiple binary outcomes. We consider four methods that vary in complexity and in their conditional independence assumptions: probabilistic classifier chains, multinomial logistic regression, multivariate logistic regression, and a Bayesian probit model. These are compared with methods that rely on conditional independence: separate univariate CPMs and stacked regression. Employing a simulation study and a real-world example, we illustrate that CPMs for joint risk prediction of multiple outcomes should only be derived using methods that model the residual correlation between outcomes. In such a situation, our results suggest that probabilistic classifier chains, multinomial logistic regression, or the Bayesian probit model are all appropriate choices. We call into question the development of CPMs for each outcome in isolation when multiple correlated or structurally related outcomes are of interest, and we recommend more multivariate approaches to risk prediction.
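    To make the distinction concrete, the sketch below contrasts the naïve product of separate models with a two-outcome probabilistic classifier chain, which factorises the joint risk as P(Y1=1, Y2=1 | x) = P(Y1=1 | x) × P(Y2=1 | x, Y1=1). It is a minimal illustration only, not the authors' implementation: sklearn's LogisticRegression is penalised by default, and the function and variable names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_probabilistic_chain(X, y1, y2):
    """Fit a two-outcome probabilistic classifier chain:
    one model for P(Y1=1 | x) and one for P(Y2=1 | x, y1)."""
    m1 = LogisticRegression(max_iter=1000).fit(X, y1)
    m2 = LogisticRegression(max_iter=1000).fit(np.column_stack([X, y1]), y2)
    return m1, m2

def joint_risk(m1, m2, X_new):
    """Joint risk of both outcomes: P(Y1=1 | x) * P(Y2=1 | x, Y1=1).
    A naive alternative multiplies two separately fitted marginal models,
    which is only valid under conditional independence given the covariates."""
    p1 = m1.predict_proba(X_new)[:, 1]
    p2_given_y1 = m2.predict_proba(np.column_stack([X_new, np.ones(len(X_new))]))[:, 1]
    return p1 * p2_given_y1
```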